Asynchronous Gradient Push

Authors

Abstract

We consider a multi-agent framework for distributed optimization where each agent has access to a local smooth strongly convex function, and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents' functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents. When the local functions are strongly convex with Lipschitz-continuous gradients, we show that the iterates at each agent converge to a neighborhood of the global minimum, where the size of the neighborhood depends on the degree of asynchrony in the network. If the agents work at the same rate, convergence to the exact minimizer is achieved. Numerical experiments demonstrate that Asynchronous Gradient-Push can minimize the global objective faster than state-of-the-art synchronous first-order methods, is more robust to failing or stalling agents, and scales better with the network size.
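
As a rough illustration of the gradient-push idea described in the abstract, the following Python sketch combines push-sum averaging over a directed graph with local gradient steps on a toy quadratic problem. The ring topology, the quadratic objectives, the step size, and the random activation used to mimic asynchrony are all assumptions made for illustration; this is a minimal sketch, not the authors' implementation.

# Minimal sketch of gradient-push on a toy problem (illustrative assumptions,
# not the method as implemented in the paper).
import numpy as np

rng = np.random.default_rng(0)
n = 5                                  # number of agents
targets = rng.normal(size=n)           # f_i(z) = 0.5 * (z - targets[i])**2
alpha = 0.05                           # constant step size

# Column-stochastic mixing matrix: each agent keeps a share of its mass and
# pushes equal shares to its out-neighbors on a directed ring; agent 0 gets
# one extra edge so the matrix is not doubly stochastic and the push-sum
# weights actually matter.
P = np.zeros((n, n))
for j in range(n):
    outs = [j, (j + 1) % n]
    if j == 0:
        outs.append(2)
    for i in outs:
        P[i, j] = 1.0 / len(outs)

x = np.zeros(n)                        # push-sum numerators
y = np.ones(n)                         # push-sum weights
for t in range(2000):
    z = x / y                          # de-biased parameter estimates
    grad = z - targets                 # gradient of each local quadratic at z
    # Random activation crudely mimics agents working at different rates;
    # inactive agents skip their gradient step on this tick.
    active = rng.random(n) < 0.7
    x = P @ np.where(active, x - alpha * grad, x)
    y = P @ y

print("agent estimates :", np.round(x / y, 3))
print("global minimizer:", round(targets.mean(), 3))

With a constant step size the estimates settle into a neighborhood of the global minimizer (the mean of the targets), which mirrors the neighborhood-convergence statement in the abstract.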


Similar Articles

Asynchronous Multicast Push: AMP

We propose a new transport paradigm, called asynchronous multicast push (AMP), that leads to much more efficient use of network bandwidth, to higher throughput, and to reduced server load. AMP decouples request and delivery for frequently requested data and uses multicast delivery. We derive quantitatively by how much the server load and the network load are reduced by introducing an asynchronism...


Asynchronous Subgradient-Push

We consider a multi-agent framework for distributed optimization where each agent in the network has access to a local convex function and the collective goal is to achieve consensus on the parameters that minimize the sum of the agents’ local functions. We propose an algorithm wherein each agent operates asynchronously and independently of the other agents in the network. When the local functi...


Asynchronous Accelerated Stochastic Gradient Descent

Stochastic gradient descent (SGD) is a widely used optimization algorithm in machine learning. In order to accelerate the convergence of SGD, a few advanced techniques have been developed in recent years, including variance reduction, stochastic coordinate sampling, and Nesterov’s acceleration method. Furthermore, in order to improve the training speed and/or leverage larger-scale training data...
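
As a small illustration of one of the acceleration techniques mentioned in this excerpt (Nesterov's method layered on SGD), the following Python sketch applies SGD with Nesterov momentum to a toy least-squares problem. The data, step size, and momentum value are illustrative assumptions and are not drawn from the cited paper.

# Minimal sketch of SGD with Nesterov momentum on a toy least-squares problem
# (all problem data and hyperparameters are assumptions for illustration).
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 10))                  # toy design matrix
w_true = rng.normal(size=10)
b = A @ w_true + 0.01 * rng.normal(size=200)    # noisy linear measurements

w = np.zeros(10)
v = np.zeros(10)                                # momentum buffer
lr, momentum = 0.002, 0.9
for epoch in range(50):
    for i in rng.permutation(len(b)):
        # Nesterov look-ahead: evaluate the stochastic gradient at w + momentum * v.
        w_ahead = w + momentum * v
        grad = (A[i] @ w_ahead - b[i]) * A[i]   # gradient of 0.5 * (a_i . w - b_i)^2
        v = momentum * v - lr * grad
        w = w + v

print("parameter recovery error:", np.linalg.norm(w - w_true))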


Convergence of Synchronous and Asynchronous – Push Technology Meets Online Discussions

Online discussions are an integral and important component of e-learning environments. However, current models can suffer from information overload, delayed response and a lack of participation. This paper examines the potential for synchronous mobile technology to increase the participation and effectiveness of asynchronous online discussions...


Asynchronous Parallel Stochastic Gradient for Nonconvex Optimization

Asynchronous parallel implementations of stochastic gradient (SG) have been broadly used in solving deep neural networks and have received many successes in practice recently. However, existing theories cannot explain their convergence and speedup properties, mainly due to the nonconvexity of most deep learning formulations and the asynchronous parallel mechanism. To fill the gaps in theory and provi...



Journal

Journal title: IEEE Transactions on Automatic Control

Year: 2021

ISSN: 0018-9286, 1558-2523, 2334-3303

DOI: https://doi.org/10.1109/tac.2020.2981035